Diverse data formats and ontologies of task-oriented dialogue (TOD) datasets hinder us from developing general dialogue models that perform well on many datasets and from studying knowledge transfer between datasets. To address this issue, we present ConvLab-3, a flexible dialogue system toolkit based on a unified TOD data format. In ConvLab-3, different datasets are transformed into one unified format and loaded by models in the same way. As a result, the cost of adapting a new model or dataset is significantly reduced. Compared to the previous releases of ConvLab (Lee et al., 2019b; Zhu et al., 2020b), ConvLab-3 allows developing dialogue systems with many more datasets and enhances the utility of the reinforcement learning (RL) toolkit for dialogue policies. To showcase the use of ConvLab-3 and inspire future work, we present a comprehensive study with various settings. We show the benefit of pre-training on other datasets for few-shot fine-tuning and RL, and encourage evaluating policies with diverse user simulators.
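The core idea of a unified TOD data format can be illustrated with a minimal sketch. The field names below are our own illustration of what such a schema might look like, not necessarily ConvLab-3's actual keys:

```python
# Hypothetical sketch of a unified task-oriented-dialogue record: every
# dataset is converted into one common schema so that models can load
# them all through the same code path. Field names here are assumptions
# for illustration, not ConvLab-3's exact specification.
dialogue = {
    "dataset": "multiwoz21",
    "dialogue_id": "PMUL0001",
    "turns": [
        {
            "speaker": "user",
            "utterance": "I need a cheap hotel in the north.",
            "dialogue_acts": [
                {"intent": "inform", "domain": "hotel",
                 "slot": "price range", "value": "cheap"},
                {"intent": "inform", "domain": "hotel",
                 "slot": "area", "value": "north"},
            ],
        },
        {
            "speaker": "system",
            "utterance": "Sure, how about the Worth House?",
            "dialogue_acts": [],
        },
    ],
}

# With every dataset in this shape, adapting a new dataset reduces to a
# one-off conversion script rather than per-model loading code.
user_turns = [t for t in dialogue["turns"] if t["speaker"] == "user"]
```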
We apply computer vision pose estimation techniques developed expressly for the data-scarce infant domain to the study of torticollis, a common condition in infants for which early identification and treatment is critical. Specifically, we use a combination of facial landmark and body joint estimation techniques designed for infants to estimate a range of geometric measures pertaining to face and upper body symmetry, drawn from an array of sources in the physical therapy and ophthalmology research literature on torticollis. We gauge performance with a range of metrics and show that the estimates of most of these geometric measures are successful, yielding strong to very strong Spearman's $\rho$ correlation with ground truth values. Furthermore, we show that these estimates, derived from pose estimation neural networks designed for the infant domain, cleanly outperform estimates derived from more widely known networks designed for the adult domain.
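The evaluation metric used above, Spearman's $\rho$, is the Pearson correlation computed on ranks rather than raw values. A self-contained sketch (pure Python, with average ranks for ties; the example is ours, not the paper's data):

```python
def rankdata(xs):
    """Assign 1-based ranks to values, averaging ranks over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_rho(a, b):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    ra, rb = rankdata(a), rankdata(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / (va * vb) ** 0.5
```

Because it depends only on ranks, $\rho$ rewards estimates that order the subjects correctly even when the absolute geometric values are off by a calibration factor, which is a sensible criterion for screening measures.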
Reinforcement learning (RL) offers the potential for training agents that can interact autonomously in the real world. However, one key limitation is the brittleness of RL algorithms with respect to core hyperparameter and network architecture choices. Furthermore, non-stationarities such as evolving training data and increasing agent complexity mean that different hyperparameters and architectures may be optimal at different points of training. This motivates AutoRL, a class of methods seeking to automate these design choices. One prominent class of AutoRL methods is population-based training (PBT), which has led to impressive performance in several large-scale settings. In this paper, we introduce two new innovations in PBT-style methods. First, we employ trust-region-based Bayesian optimization, enabling full coverage of the high-dimensional mixed hyperparameter search space. Second, we show that, using a generational approach, we can also learn architectures and hyperparameters jointly in a single training run. Leveraging the new, highly parallelizable Brax physics engine, we show that these innovations lead to large performance gains, significantly outperforming the tuned baseline while learning entire configurations on the fly. Code is available at https://github.com/xingchenwan/bgpbt.
Bilateral postural symmetry plays a key role as a potential risk marker for autism spectrum disorder (ASD) and as a symptom of congenital muscular torticollis (CMT) in infants, but current methods of assessing symmetry require laborious clinical expert assessments. In this paper, we develop a computer-vision-based infant symmetry assessment system, leveraging 3D human pose estimation for infants. Evaluation and calibration of our system are complicated by our finding, from a survey of human ratings of angle and symmetry, that such ratings exhibit low inter-rater reliability. To rectify this, we develop a Bayesian estimator of the ground truth derived from a probabilistic graphical model of fallible human raters. We show that a 3D infant pose estimation model can achieve 68% area under the receiver operating characteristic curve in predicting the Bayesian aggregate labels, compared to only 61% for a 2D infant pose estimation model and 60% for a 3D adult pose estimation model, highlighting the importance of 3D poses and infant domain knowledge in assessing infant body symmetry. Our survey analysis also shows that human ratings are susceptible to higher levels of bias and inconsistency; accordingly, our final 3D-pose-based symmetry assessment system is calibrated by, but not directly supervised by, the Bayesian aggregate human ratings, yielding higher consistency and lower levels of inter-limb assessment bias.
Image regression tasks, such as bone mineral density (BMD) estimation and left-ventricular ejection fraction (LVEF) prediction, play an important role in computer-aided disease assessment. Most deep regression methods train neural networks with a single regression loss function, such as MSE or L1 loss. In this paper, we propose the first contrastive learning framework for deep image regression, namely AdaCon, which consists of a feature learning branch trained with a novel adaptive-margin contrastive loss and a regression prediction branch. Our method incorporates label distance relationships as part of the learned feature representations, which allows for better performance on downstream regression tasks. Moreover, it can be used as a plug-and-play module to improve the performance of existing regression methods. We demonstrate the effectiveness of AdaCon on bone mineral density estimation from X-ray images and left-ventricular ejection fraction prediction from echocardiogram videos. AdaCon leads to relative improvements in MAE of 3.3% and 5.9% over state-of-the-art BMD estimation and LVEF prediction methods, respectively.
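The idea of an adaptive-margin contrastive loss that encodes label distance can be sketched as follows. This is our own heavily simplified illustration (1-D features, pure Python), not the paper's implementation, and the function name and `scale` parameter are assumptions:

```python
def adaptive_margin_contrastive_loss(features, labels, scale=1.0):
    """Illustrative sketch: push feature pairs apart by a margin that
    grows with the distance between their regression labels, so the
    feature space preserves label ordering. Not AdaCon's exact loss."""
    loss, count = 0.0, 0
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            d = abs(features[i] - features[j])            # feature distance
            margin = scale * abs(labels[i] - labels[j])   # adaptive margin
            loss += max(0.0, margin - d)                  # hinge penalty
            count += 1
    return loss / max(count, 1)
```

Under this toy loss, features whose pairwise distances already match the label distances incur zero penalty, which is the intuition behind using label distance relationships to shape the representation.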
The growing scale of neural networks and their expanding application space have created a demand for more energy- and memory-efficient, artificial-intelligence-specific hardware. Avenues to mitigate the main issue, the von Neumann bottleneck, include in-memory and near-memory architectures, as well as algorithmic approaches. Here, we leverage the low-power and inherently binary operation of magnetic tunnel junctions (MTJs) to demonstrate neural network hardware inference based on passive arrays of MTJs. In general, transferring a trained network model to hardware for inference suffers performance degradation due to device-to-device variations, write errors, parasitic resistance, and nonidealities. To quantify the effect of these hardware realities, we benchmark 300 unique weight matrix solutions of a 2-layer perceptron classifying the Wine dataset, evaluating both classification accuracy and write fidelity. Despite device imperfections, we achieve software-equivalent accuracy of up to 95.3% with proper tuning of 15 x 15 MTJ arrays having a range of device sizes. The success of this tuning process suggests that new metrics are needed to characterize the performance and quality of networks reproduced in mixed-signal hardware.
Graph neural networks, a popular class of models effective in a wide variety of graph-based learning tasks, have been shown to be vulnerable to adversarial attacks. While most of the literature focuses on such vulnerability in node-level classification tasks, little effort has been dedicated to analyzing adversarial attacks on graph-level classification, an important problem with numerous real-life applications such as biochemistry and social network analysis. The few existing methods often require unrealistic setups, such as access to internal information of the victim model, or an impractically large number of queries. We present a novel Bayesian optimization-based attack method for graph classification models. Our method is black-box, query-efficient, and parsimonious with respect to the perturbation applied. We empirically validate the effectiveness and flexibility of the proposed method on graph classification tasks involving varying graph properties, constraints, and modes of attack. Finally, we analyze common interpretable patterns behind the adversarial samples produced, which may shed further light on the adversarial robustness of graph classification models.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
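The simpler of the two attacks, NAIVEATTACK, stamps a trigger into the raw data before distillation begins. A minimal sketch of such a trigger-stamping step, on a toy 2-D image represented as nested lists (the function name, patch placement, and trigger value are our assumptions, not the paper's exact procedure):

```python
def add_trigger(image, trigger_value=1.0, size=2):
    """Illustrative sketch of a NAIVEATTACK-style trigger: stamp a
    small square patch into the bottom-right corner of an image so
    that models trained on the distilled data learn the backdoor.
    `image` is a 2-D list of floats; the input is left unmodified."""
    h, w = len(image), len(image[0])
    patched = [row[:] for row in image]          # copy, keep input intact
    for r in range(h - size, h):
        for c in range(w - size, w):
            patched[r][c] = trigger_value
    return patched

# Stamp a 2x2 trigger into a blank 4x4 "image".
clean = [[0.0] * 4 for _ in range(4)]
poisoned = add_trigger(clean)
```

DOORPING differs in that the trigger is not fixed up front but is iteratively re-optimized throughout the distillation procedure, which the abstract reports yields near-perfect attack success rates.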
In this paper we explore the task of modeling (semi) structured object sequences; in particular, we focus our attention on the problem of developing a structure-aware input representation for such sequences. In such sequences, we assume that each structured object is represented by a set of key-value pairs which encode the attributes of the structured object. Given a universe of keys, a sequence of structured objects can then be viewed as an evolution of the values for each key over time. We encode and construct a sequential representation using the values for a particular key (Temporal Value Modeling - TVM) and then self-attend over the set of key-conditioned value sequences to create a representation of the structured object sequence (Key Aggregation - KA). We pre-train and fine-tune the two components independently and present an innovative training schedule that interleaves the training of both modules with shared attention heads. We find that this iterative two-part training results in better performance than a unified network with hierarchical encoding, as well as other methods that use a {\em record-view} representation of the sequence \cite{de2021transformers4rec} or a simple {\em flattened} representation of the sequence. We conduct experiments using real-world data to demonstrate the advantage of interleaving TVM-KA on multiple tasks, along with detailed ablation studies motivating our modeling choices. We find that our approach performs better than flattening sequence objects and also allows us to operate on significantly larger sequences than existing methods.
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (e.g., phone unlocking) while lacking consideration of long-distance scenes (e.g., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this setting, low image resolution and noise interference are new challenges for surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality, from three aspects: (1) An Image Quality Variable module (IQV) is introduced to recover discrimination-related image information by incorporating a super-resolution network. (2) Generated sample pairs are used to simulate quality variance distributions, helping the contrastive learning strategy obtain robust feature representations under quality variation. (3) A Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.